This talk was recorded at NDC Manchester in Manchester, England.
#ndcmanchester #ndcconferences #developer #softwaredeveloper
#ai #ml #machinelearning #testing
We often talk about bias in AI like it's something you can "fix" before launch and forget. Run a few checks, clean your data, tick some boxes, and you’re good to go—right? Not really.
In real life, AI systems keep changing after they go live. They get retrained, updated, and interact with real people in real situations. And every time we do that, there’s a chance we introduce new bias, even if we’ve done everything “right” during development. That’s why we need to stop treating bias like a one-time issue and start thinking of it as part of the whole AI lifecycle.
This talk is about how bias doesn’t just happen before deployment; it’s something that can grow during testing, retraining, and even regular use. And if we want to build fairer, more trustworthy AI, we need to treat bias as an ongoing process, not a checklist item.
I’ll also talk about why we can’t leave this work to the AI engineers alone. Designers, product managers, developers, testers, legal teams, and even users all need to be part of the conversation, because each of them sees risks, gaps, and impacts from a different angle.
We’ll look at:
- How bias creeps in after launch, even if your original model did everything “right” (see the sketch below)
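To make that last point concrete, here is a minimal, hypothetical sketch of the kind of check that has to keep running after launch: recomputing a simple fairness signal (here, the demographic parity gap in positive-prediction rates between groups) over each window of live predictions and after every retrain. The data, group labels, and threshold below are made up for illustration; a real system would choose metrics and cutoffs to fit its own domain.

```python
# Hypothetical sketch: one fairness signal, recomputed continuously after launch.
from collections import defaultdict

def positive_rate_by_group(predictions):
    """predictions: iterable of (group_label, predicted_positive) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, positive in predictions:
        counts[group][0] += int(positive)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_gap(predictions):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = positive_rate_by_group(predictions)
    return max(rates.values()) - min(rates.values()) if rates else 0.0

# Run this on every window of live traffic and after every retrain,
# not just once before launch.
window = [("A", True), ("A", False), ("B", False), ("B", False), ("B", True)]
ALERT_THRESHOLD = 0.2  # assumption: tune per product, domain, and legal context
gap = demographic_parity_gap(window)
if gap > ALERT_THRESHOLD:
    print(f"Fairness drift: parity gap {gap:.2f} exceeds {ALERT_THRESHOLD}")
```

The specific metric matters less than where the check lives: on live traffic and retraining pipelines, not in a one-time pre-launch checklist.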